16 research outputs found

    Integrating Incidence Angle Dependencies Into the Clustering-Based Segmentation of SAR Images

    Synthetic aperture radar systems perform signal acquisition under varying incidence angles and register an implicit intensity decay from near to far range. Owing to the geometrical interaction between microwaves and the imaged targets, the rates at which intensities decay depend on the nature of the targets, thus rendering single-rate image correction approaches only partially successful. The decay, also known as the incidence angle effect, impacts the segmentation of wide-swath images performed on absolute intensity values. We propose to integrate the target-specific intensity decay rates into a nonstationary statistical model, for use in a fully automatic and unsupervised segmentation algorithm. We demonstrate this concept by assuming Gaussian distributed log-intensities and linear decay rates, a fitting approximation for the smooth systematic decay observed for extended flat targets. The segmentation is performed on Sentinel-1, Radarsat-2, and UAVSAR wide-swath scenes containing open water, sea ice, and oil slicks. As a result, we obtain segments connected throughout the entire incidence angle range, thus overcoming the limitations of modeling that does not account for different per-target decays. The model simplicity also allows for short execution times and presents the segmentation approach as a potential operational algorithm. In addition, we estimate the log-linear decay rates and examine their potential for a physical interpretation of the segments.
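    A minimal sketch of the log-linear decay model the abstract assumes: log-intensity falls off linearly with incidence angle, at a target-specific rate, with Gaussian noise. All numerical values below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # Simulate one target's log-intensities over the swath; the decay rate
    # (slope) and level (intercept) are assumed, target-specific values.
    rng = np.random.default_rng(0)
    theta = np.linspace(20.0, 45.0, 200)    # incidence angles (degrees)
    true_slope = -0.25                      # assumed decay rate (dB/degree)
    true_intercept = -5.0                   # assumed level at theta = 0 (dB)
    log_intensity = (true_intercept + true_slope * theta
                     + rng.normal(0.0, 0.3, theta.size))  # Gaussian, as in the model

    # Least-squares fit of the linear trend; under the Gaussian log-intensity
    # assumption this is the maximum-likelihood estimate of the decay rate.
    A = np.column_stack([theta, np.ones_like(theta)])
    (slope, intercept), *_ = np.linalg.lstsq(A, log_intensity, rcond=None)

    # Subtracting the fitted target-specific trend flattens the segment
    # across the whole incidence-angle range, which is what keeps segments
    # connected from near to far range.
    residual = log_intensity - (intercept + slope * theta)
    ```

    In the paper's unsupervised setting, a decay rate like this would be estimated per cluster inside the clustering loop rather than for a known target, but the per-class linear fit is the core of the nonstationary model.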

    Deep learning-based 2D/3D registration of an atlas to biplanar X-ray images

    No full text
    Purpose The registration of a 3D atlas image to 2D radiographs enables 3D pre-operative planning without the need to acquire costly, high-dose CT scans. Recently, many deep-learning-based 2D/3D registration methods have been proposed which tackle the problem as a reconstruction by regressing the 3D image directly from the radiographs, rather than registering an atlas image. Consequently, they are less constrained against infeasible reconstructions and have no possibility to warp auxiliary data. Finally, they are, by construction, limited to orthogonal projections. Methods We propose a novel end-to-end trainable 2D/3D registration network that regresses a dense deformation field that warps an atlas image such that the forward projection of the warped atlas matches the input 2D radiographs. We effectively take the projection matrix into account in the regression problem by integrating a projective and inverse projective spatial transform layer into the network. Results Comprehensive experiments conducted on simulated DRRs from patient CT images demonstrate the efficacy of the network. Our network yields an average Dice score of 0.94 and an average symmetric surface distance of 0.84 mm on our test dataset. We determined experimentally that projection geometries with an 80 to 100 degree projection-angle difference result in the highest accuracy. Conclusion Our network is able to accurately reconstruct patient-specific CT images from a pair of near-orthogonal calibrated radiographs by regressing a deformation field that warps an atlas image or any other auxiliary data. Our method is not constrained to orthogonal projections, increasing its applicability in medical practice. It remains a future task to extend the network for uncalibrated radiographs.
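    A minimal sketch (not the authors' network) of the two operations the method composes: warping an atlas volume with a dense deformation field, then forward-projecting the warped volume so it can be compared with a 2D radiograph. Shapes and values are illustrative, and a parallel-beam ray sum stands in for the full projective spatial transform layer.

    ```python
    import numpy as np

    # A toy atlas volume with a single "bone" block.
    atlas = np.zeros((16, 16, 16))
    atlas[4:8, 4:8, 4:8] = 1.0

    # Dense deformation field: one 3-vector displacement per voxel
    # (here a constant +2-voxel shift along axis 0).
    disp = np.zeros((3, 16, 16, 16))
    disp[0] = 2.0

    # Warp by nearest-neighbour resampling at (identity grid + displacement);
    # the real layer would use differentiable trilinear interpolation.
    grid = np.indices(atlas.shape).astype(float)
    idx = np.rint(grid + disp).astype(int)
    valid = np.all((idx >= 0) & (idx < 16), axis=0)
    warped = np.zeros_like(atlas)
    warped[valid] = atlas[idx[0][valid], idx[1][valid], idx[2][valid]]

    # Forward projection: for a parallel beam aligned with axis 0, the
    # DRR analogue is simply the ray sum along that axis.
    drr = warped.sum(axis=0)
    ```

    Training such a network amounts to comparing `drr` against the input radiograph and backpropagating through both the projection and the warp, which is why the spatial transform layers must be differentiable.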
